Troubleshooting
Wire Data modular input stops working after upgrade
If you manually delete the application folders without stopping Splunk and then install or upgrade the app, the Wire Data modular input stops working. Symptoms include:
- The Wire Data modular input configuration (splunk_app_stream location) is not present.
- Wire data is not present in the data input.
- The Wire Data configuration is present, but enabling streamfwd in the UI has no effect.
For the first two symptoms above, restarting Splunk might fix the issue. Otherwise, follow this workaround:
- Stop Splunk.
cd $SPLUNK_HOME/bin
./splunk stop
- In $SPLUNK_HOME/etc/apps, delete the splunk_app_stream and Splunk_TA_Stream folders.
- Start Splunk.
cd $SPLUNK_HOME/bin
./splunk start
- In Splunk Web, reinstall Splunk Stream.
- Restart Splunk from the UI.
- Open Settings > Data inputs.
- Click Enable.
The Wire Data modular input now appears in the UI.
Stop Splunk before you delete the splunk_app_stream or Splunk_TA_Stream directories.
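The CLI portion of this workaround, consolidated into a single shell sequence (the reinstall and enable steps still happen in Splunk Web):
cd $SPLUNK_HOME/bin
./splunk stop

# Remove both Stream app directories only after Splunk has fully stopped.
rm -rf $SPLUNK_HOME/etc/apps/splunk_app_stream
rm -rf $SPLUNK_HOME/etc/apps/Splunk_TA_Stream

./splunk start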
Wire Data modular input fails to start on Linux
- Check splunkd.log for the following error message:
Unable to initialize modular input "streamfwd" defined inside the app "Splunk_TA_stream": Introspecting scheme=streamfwd: script running failed (killed by signal 6: Aborted)
- Run the Splunk_TA_stream/linux_x86_64/bin/streamfwd --version command from the CLI and see if it results in the following output:
/opt/splunkforwarder/etc/apps/Splunk_TA_stream/linux_x86_64/bin/streamfwd --version
terminate called after throwing an instance of 'std::runtime_error'
  what(): locale::facet::_S_create_c_locale name not valid
Aborted (core dumped)
- Use this workaround: set the LC_ALL locale environment variable to either "en_US.UTF-8" or "C.UTF-8":
export LC_ALL="en_US.UTF-8"
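The export above applies only to the current shell session. A hedged way to make the setting persistent for the Splunk service (assuming a default deployment where splunkd reads $SPLUNK_HOME/etc/splunk-launch.conf) is to set the variable there and restart Splunk:
# Assumption: environment variables in splunk-launch.conf use KEY=value syntax
# with no "export" keyword. Add this line, then restart Splunk.
LC_ALL=en_US.UTF-8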
Stream forwarders do not send data
If stream forwarders do not send data after upgrading to Splunk Stream 7.1.3 or later from a previous version, you might see error messages such as the following:
WARN [139650313393920] (HTTPRequestSender.cpp:1485) stream.SplunkSenderHTTPEventCollector - (#7) TCP connection failed: Connection refused
To mitigate this, make sure that the stream forwarder is configured correctly. Change the HEC configuration as needed:
- Open the Distributed Forwarder Management in the Stream App.
- Select Install Stream Forwarders.
- Verify that the curl command is the one that was run on the Stream Forwarder App.
- Turn off the HEC Autoconfig option.
- Manually add the HEC (heavy forwarder or indexer) URL.
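To confirm that the forwarder host can actually reach the HEC endpoint, you can send a manual test event from that host (the hostname, port, and token below are placeholders, not values from this deployment):
# A "Connection refused" here indicates a network or HEC configuration problem
# on the receiving HF or indexer rather than a Stream forwarder problem.
curl -k "https://your-hec-host:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "stream forwarder connectivity test"}'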
Forwarder fails to collect NetFlow data
Activating the configuration templates stops the collection of NetFlow data on the forwarder. To mitigate this, try the following steps:
- Find the proper configuration for your NETFLOW stream in the relevant Splunk Stream application KVStore collection named "streams".
- Extract that JSON configuration using a curl command from the KVStore.
- Extract the relevant configuration for NETFLOW from the JSON configuration.
- Save the configuration as a file and move it to your independent Stream forwarder under the /opt/streamfwd/configs/es/ directory.
- Restart your forwarder.
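A rough sketch of steps 1 through 4 using the Splunk REST API (the management port, credentials, app context, and file names below are assumptions; adjust them for your environment):
# Pull the "streams" KVStore collection from the search head that hosts the Stream app.
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/splunk_app_stream/storage/collections/data/streams" \
  -o streams.json

# Locate the NETFLOW stream entry inside streams.json, save that JSON object to its
# own file, and copy it to the independent Stream forwarder, for example:
#   /opt/streamfwd/configs/es/netflow_stream.json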
How to create a pcap file
If you encounter an issue with your Splunk Stream deployment, the Stream support team might ask you to provide a pcap file for debugging purposes.
Create a pcap in Linux
Use tcpdump to create a pcap in Linux. By default, tcpdump captures the first 96 bytes of data from a packet. To capture more data, use the -s<number> option to set the snaplen (snapshot length), where <number> is the number of bytes you want to capture. Use -s0 to run tcpdump with unlimited snaplen.
tcpdump -i eth0 -s0 -w filename.pcap
For example, to capture Oracle TNS traffic only on port 1521:
tcpdump -i eth0 -s0 -w file.pcap tcp port 1521
Note: To see a list of NIC names on your server, enter tcpdump -D.
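For longer captures, tcpdump's file rotation options can keep individual capture files to a manageable size (the interface, file size, file count, and port filter below are only examples):
# Capture full packets on eth0, rotating across 5 files of roughly 100 MB each,
# limited to Oracle TNS traffic on port 1521.
tcpdump -i eth0 -s0 -C 100 -W 5 -w oracle.pcap tcp port 1521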
Create a pcap in Windows
You can create a pcap in Windows using a utility such as Wireshark.
For instructions on creating a pcap file in Wireshark, see Saving captured packets.